
    Trends in Phishing Attacks: Suggestions for Future Research

    Deception in computer-mediated communication is a widespread phenomenon. Cybercriminals exploit technological media to communicate with potential targets because these channels reduce both deception cues and the risk of detection. A prevalent deception-based attack in computer-mediated communication is phishing. Prior phishing research has addressed the “bait” and “hook” components of phishing attacks, the human-computer interaction that takes place as users judge the veracity of phishing emails and websites, and the development of technologies that can aid users in identifying and rejecting these attacks. Despite the extant research on this topic, phishing attacks continue to succeed as tactics evolve, rendering existing research less relevant, and as users disregard the recommendations of automated phishing tools. This paper summarizes the core of phishing research, provides an update on trending attack methods, and proposes future research addressing computer credibility in a phishing context.

    Conversational Agents, Conversational Relevance, and Disclosure: Comparing the Effectiveness of Chatbots and SVITs in Eliciting Sensitive Information

    Conversational agents (CAs) in various forms are used in a variety of information systems. An abundance of prior research has focused on evaluating the traits that make CAs effective. Most studies assume, however, that increasing the anthropomorphism of an agent will improve its performance. In a sensitive information disclosure task, that may not always be the case. We leverage self-disclosure, social desirability, and social presence theories to predict how differing modes of conversational agents affect information disclosure. In this paper, we propose a laboratory experiment to compare how the mode of a given CA (text-based chatbot or voice-based smart speaker), paired with either high or low levels of conversational relevance, affects the disclosure of personally sensitive information. In addition to understanding influences on disclosure, we aim to break down the mechanisms through which CA design influences disclosure.

    Factors for Assessing Prospective Doctoral Applicant Readiness

    Doctoral program admissions management is universal yet rarely addressed in the Information Systems literature. Readiness, a concept well established in educational theory, offers a compelling theoretical basis for further professionalizing the review of application portfolios. We propose a literature-based, extensible assessment rubric for reviewing doctoral program applicant materials based on the concept of research readiness. The rubric pays particular attention to universal competencies required for progressing from student to future IS research professional. Unifying assessment standards for doctoral admissions facilitates faculty decision-making while creating clear expectations for prospective candidates about minimum requirements.

    Countermeasures and Eye Tracking Deception Detection

    A recent development in the field of deception detection has been the emergence of rapid, noncontact tools for automated detection. This research-in-progress paper describes a method for assessing the robustness of eye tracker-based deception detection to countermeasures employed by knowledgeable participants.

    Real-time Embodied Agent Adaptation

    This paper reports on an initial investigation of two emerging technologies, FaceFX and Smartbody, which are capable of creating lifelike animations for embodied conversational agents (ECAs) such as the AVATAR agent. Real-time rendering and animation generation technologies can enable rapid adaptation of ECAs to changing circumstances. The benefits of each package are discussed.

    Design of a Chatbot Social Engineering Victim

    Social engineering is an ever-growing problem in online and offline communication. Companies invest time and resources to train employees not to fall victim to attacks. The concept of adversarial thinking encourages people to learn the ways of the attacker in order to better defend themselves. This research introduces the design features of a chatbot that plays the role of a social engineering victim, allowing people to perform the role of an attacker in a training exercise. By attacking this chatbot, people can better learn how to defend themselves.

    Natural-Setting PHR Usability Evaluation using the NASA TLX to Measure Cognitive Load of Patients

    While personal health records (PHRs) carry an array of potential benefits, such as increased patient engagement, poor usability remains a significant barrier to patients’ adoption of PHRs. In this mixed-methods study, we evaluate the usability of one PHR feature, an intake form called the pre-visit summary, from the perspective of cognitive load, with real cardiovascular patients in a natural setting. A validated measure of cognitive load, the NASA Task Load Index, was used along with retrospective interviews to identify tasks within the pre-visit summary that increased participants’ cognitive load. We found that the medications, immunizations, active health concerns, and family history pages induced a higher cognitive load, both because participants struggled to recall personal health information and because of user interface design issues. This research is significant in that it uses validated measures of cognitive load to study real patients interacting with their PHR in a natural environment.
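    The NASA Task Load Index used in the study above has a standard scoring procedure: six subscale ratings (mental demand, physical demand, temporal demand, performance, effort, frustration), each on a 0-100 scale, are either averaged directly (raw TLX) or weighted by the results of 15 pairwise comparisons. A minimal sketch of that arithmetic follows; the sample ratings and weights are illustrative, not data from the study.

    ```python
    # Sketch of NASA-TLX scoring. Raw TLX is the mean of the six subscale
    # ratings; weighted TLX multiplies each rating by its pairwise-comparison
    # weight (the 15 comparisons yield weights summing to 15) and divides by 15.

    SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

    def raw_tlx(ratings):
        """Unweighted (raw) TLX: mean of the six subscale ratings (0-100)."""
        return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

    def weighted_tlx(ratings, weights):
        """Weighted TLX: weights come from 15 pairwise comparisons and sum to 15."""
        assert sum(weights.values()) == 15, "pairwise weights must total 15"
        return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15

    # Illustrative values only (not study data):
    ratings = {"mental": 70, "physical": 10, "temporal": 55,
               "performance": 40, "effort": 60, "frustration": 65}
    weights = {"mental": 5, "physical": 0, "temporal": 3,
               "performance": 2, "effort": 3, "frustration": 2}
    print(raw_tlx(ratings))               # 50.0
    print(weighted_tlx(ratings, weights))
    ```

    The weighted variant emphasizes the subscales a participant judged most relevant to the task, which is why it is often preferred over the raw mean in usability studies.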

    Facilitating Natural Conversational Agent Interactions: Lessons from a Deception Experiment

    This study reports the results of a laboratory experiment exploring interactions between humans and a conversational agent. Using the ChatScript language, we created a chatbot that asked participants to describe a series of images. The two objectives of this study were (1) to analyze the impact of dynamic responses on participants’ perceptions of the conversational agent, and (2) to explore behavioral changes in interactions with the chatbot (i.e., response latency and pauses) when participants engaged in deception. We discovered that a chatbot that provides adaptive responses based on the participant’s input dramatically increases the perceived humanness and engagement of the conversational agent. Deceivers interacting with a dynamic chatbot exhibited consistent response latencies and pause lengths, while deceivers interacting with a static chatbot exhibited longer response latencies and pause lengths. These results offer new insights into social interactions with computer agents during truthful and deceptive interactions.
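    The abstract does not define its measures precisely, but response latency and pause length are commonly derived from keystroke timestamps: latency as the gap between the agent's prompt and the participant's first keystroke, and pause length as the gap between consecutive keystrokes within a response. A sketch under those assumptions (the data layout and function names are illustrative):

    ```python
    # Hedged sketch of latency/pause measures from keystroke timestamps
    # (seconds). The exact operationalization in the study may differ.

    def response_latency(prompt_time, keystroke_times):
        """Seconds from the agent's prompt to the participant's first keystroke."""
        return keystroke_times[0] - prompt_time

    def longest_pause(keystroke_times):
        """Longest gap between consecutive keystrokes within one response."""
        gaps = [b - a for a, b in zip(keystroke_times, keystroke_times[1:])]
        return max(gaps) if gaps else 0.0

    # One simulated turn: prompt at t=0.0, typing begins at t=2.5
    keystrokes = [2.5, 2.8, 3.0, 4.9, 5.1]
    print(response_latency(0.0, keystrokes))  # 2.5
    print(longest_pause(keystrokes))
    ```

    Comparing these per-turn measures across truthful and deceptive turns is one straightforward way to quantify the behavioral differences the abstract describes.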